
    Ivan's Letter (Part 1)

    The following cipher puzzle appeared in the May 1930 issue of The Enigma, the official publication of the National Puzzlers' League. Erik Bodin offered a $10 prize to the first person to discover the secret message in Ivan's letter, hinting only that the letter encoded the name of a point to be attacked, the date of the attack, and the troops involved. The cipher is unquestionably difficult; according to a brief note in the October 1930 Enigma, no one ever solved the puzzle. In the original article, the letter is presented in handwritten form; the slightly modified typewritten version given below preserves (and, in fact, makes somewhat easier to detect) the hidden message. The second half of the article, giving the solution to the cipher, will appear in the next issue of Word Ways.

    Bayesian inference by active sampling


    GlowBots: Robots that Evolve Relationships

    GlowBots are small wheeled robots that develop complex relationships with each other and with their owner. They develop attractive patterns that are affected both by user interaction and by communication between the robots. The project shows how robots can interact with humans in subtle and sustainable ways for entertainment and enjoyment.

    Compositional Uncertainty in Deep Gaussian Processes

    Gaussian processes (GPs) are nonparametric priors over functions. Fitting a GP implies computing a posterior distribution of functions consistent with the observed data. Similarly, deep Gaussian processes (DGPs) should allow us to compute a posterior distribution of compositions of multiple functions giving rise to the observations. However, exact Bayesian inference is intractable for DGPs, motivating the use of various approximations. We show that the application of simplifying mean-field assumptions across the hierarchy leads to the layers of a DGP collapsing to near-deterministic transformations. We argue that such an inference scheme is suboptimal, not taking advantage of the potential of the model to discover the compositional structure in the data. To address this issue, we examine alternative variational inference schemes allowing for dependencies across different layers and discuss their advantages and limitations.
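
    To make the compositional prior concrete, the sketch below (illustrative only, not the authors' code; all names and settings are assumptions) draws one sample from a two-layer DGP prior f2(f1(x)) by composing two independent GP prior samples with RBF kernels. Mean-field inference, the abstract argues, tends to collapse the hidden layer f1(x) to a near-deterministic function instead of retaining uncertainty over such compositions.

        # Minimal sketch: one draw from a two-layer deep GP prior f2(f1(x)).
        import numpy as np

        def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
            # Squared-exponential kernel on 1-D inputs stored as column vectors.
            return variance * np.exp(-0.5 * (a - b.T) ** 2 / lengthscale**2)

        def sample_gp_prior(x, lengthscale=1.0, jitter=1e-6, rng=None):
            # Draw one function sample f ~ GP(0, k) evaluated at the inputs x.
            rng = np.random.default_rng(0) if rng is None else rng
            K = rbf_kernel(x, x, lengthscale) + jitter * np.eye(len(x))
            return np.linalg.cholesky(K) @ rng.standard_normal((len(x), 1))

        x = np.linspace(-3, 3, 100).reshape(-1, 1)
        rng = np.random.default_rng(42)
        h = sample_gp_prior(x, lengthscale=1.0, rng=rng)  # hidden layer: f1(x)
        y = sample_gp_prior(h, lengthscale=0.5, rng=rng)  # output layer: f2(f1(x))

    A faithful posterior over (f1, f2) would keep many compositions consistent with the data; a posterior in which the hidden layer h is effectively deterministic discards exactly that structure.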

    Modulating Surrogates for Bayesian Optimization

    Bayesian optimization (BO) methods often rely on the assumption that the objective function is well-behaved, but in practice this is seldom true for real-world objectives, even if noise-free observations can be collected. Common approaches, which try to model the objective as precisely as possible, often fail to make progress by spending too many evaluations modeling irrelevant details. We address this issue by proposing surrogate models that focus on the well-behaved structure in the objective function, which is informative for search, while ignoring detrimental structure that is challenging to model from few observations. First, we demonstrate that surrogate models with appropriate noise distributions can absorb challenging structures in the objective function by treating them as irreducible uncertainty. Second, we show that a latent Gaussian process is an excellent surrogate for this purpose, compared with Gaussian processes with standard noise distributions. We perform numerous experiments on a range of BO benchmarks and find that our approach improves reliability and performance when faced with challenging objective functions.
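
    As a rough illustration of the first point (a hypothetical sketch, not the paper's implementation, which uses a latent Gaussian process), the snippet below fits a plain GP surrogate whose noise variance absorbs the objective's high-frequency detail, then picks the next evaluation by expected improvement. The objective f, the kernel, and every parameter value are assumptions made for the example.

        # Hypothetical sketch: GP surrogate with inflated noise + expected improvement.
        import numpy as np
        from scipy.stats import norm

        def rbf(a, b, ls=1.0):
            # Squared-exponential kernel on 1-D inputs stored as column vectors.
            return np.exp(-0.5 * (a - b.T) ** 2 / ls**2)

        def gp_posterior(x_train, y_train, x_test, ls=1.0, noise_var=0.1):
            # Predictive mean/std; noise_var soaks up structure we choose not to model.
            K = rbf(x_train, x_train, ls) + noise_var * np.eye(len(x_train))
            K_s = rbf(x_train, x_test, ls)
            mu = K_s.T @ np.linalg.solve(K, y_train)
            v = np.linalg.solve(K, K_s)
            var = 1.0 + noise_var - np.sum(K_s * v, axis=0).reshape(-1, 1)
            return mu, np.sqrt(np.maximum(var, 1e-12))

        def expected_improvement(mu, sigma, best_y):
            # EI for minimisation at each candidate point.
            z = (best_y - mu) / sigma
            return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        # Smooth trend plus high-frequency "detrimental" structure.
        f = lambda x: np.sin(x) + 0.3 * np.sin(25 * x)
        rng = np.random.default_rng(0)
        x_train = rng.uniform(-3, 3, (8, 1))
        y_train = f(x_train)
        x_cand = np.linspace(-3, 3, 400).reshape(-1, 1)
        mu, sigma = gp_posterior(x_train, y_train, x_cand, noise_var=0.3**2)
        x_next = x_cand[np.argmax(expected_improvement(mu, sigma, y_train.min()))]

    Raising noise_var trades fidelity to fine detail for a smoother, more searchable surrogate; per the abstract, a latent GP plays this role more flexibly than a standard Gaussian noise model.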